shadow model
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- North America > United States > Indiana > Tippecanoe County > Lafayette (0.04)
- North America > United States (0.04)
- Asia > China > Hubei Province > Wuhan (0.04)
- Asia > China > Chongqing Province > Chongqing (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.14)
- Asia > Singapore (0.04)
- Asia > China (0.04)
_NeurIPS2023_CR__Certified_Backdoor_Detection.pdf
Thus, we did not create new threats to society. Moreover, our work provides a new perspective on backdoor defense, as it is the first to address the certification of backdoor detection. This assumption holds in general in practice. In our setting, this is reflected by a small sample-wise local probability for the labeled class for most of the samples used for computing LDP, which can easily lead to a large LDP. In the following, we show how a larger deviation of the learned decision boundary of a binary Bayesian classifier affects its LDP.
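The excerpt does not define LDP, so the following is only a minimal numerical sketch of the last claim: for a binary Bayesian classifier with Gaussian class-conditionals, shifting the learned decision boundary away from the Bayes-optimal one lowers the sample-wise posterior of the labeled class. The mean-shift model of the deviation, the 0.5 threshold, and the aggregate statistic are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D binary Bayesian classifier: class-conditional Gaussians
# N(-1, 1) for class 0 and N(+1, 1) for class 1, equal priors.
# The Bayes-optimal decision boundary is at x = 0.
mu0, mu1, sigma = -1.0, 1.0, 1.0

def posterior_class1(x, shift=0.0):
    """Posterior P(y=1 | x) when the learned boundary deviates by `shift`.

    The deviation is modeled as a translation of the learned class means,
    so a larger |shift| means a larger deviation of the learned decision
    boundary from the Bayes-optimal one. (Illustrative assumption only.)
    """
    p1 = norm.pdf(x, loc=mu1 + shift, scale=sigma)
    p0 = norm.pdf(x, loc=mu0 + shift, scale=sigma)
    return p1 / (p0 + p1)

rng = np.random.default_rng(0)
x = rng.normal(mu1, sigma, size=10_000)  # samples whose labeled class is 1

for shift in (0.0, 0.5, 1.0):
    p = posterior_class1(x, shift)
    # Fraction of labeled-class samples with a small "local" probability;
    # the 0.5 threshold is a placeholder, not the paper's definition.
    frac_small = (p < 0.5).mean()
    print(f"boundary deviation {shift:.1f}: "
          f"mean P(y=1|x) = {p.mean():.3f}, "
          f"fraction with small labeled-class prob = {frac_small:.3f}")
```

As the deviation grows, more labeled-class samples end up with a small sample-wise posterior, which matches the direction of the argument in the excerpt.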
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
M4I: Multi-modal Models Membership Inference
ROUGE-N scores measure the overlap of n-grams [2] between the generated and reference sequences. These scores are then averaged over the whole corpus to obtain an overall quality measure. For both proposed M4I attack methods, shadow models are indispensable. The first hidden layer in the attack model has 256 units and the second hidden layer has 20 units, both activated by the ReLU function. We used a ResNet-LSTM architecture as the target model architecture.
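A minimal sketch of the pieces described above, assuming PyTorch and pre-tokenized inputs: a ROUGE-N recall helper and the two-hidden-layer attack model (256 and 20 ReLU units). The input feature dimension (`in_dim`) and the final two-logit output layer are assumptions; the excerpt specifies only the hidden layers.

```python
from collections import Counter

import torch
import torch.nn as nn

def rouge_n(candidate, reference, n=2):
    """ROUGE-N recall: fraction of the reference's n-grams that also
    appear in the generated (candidate) sequence. Inputs are assumed
    to be pre-tokenized lists of strings."""
    cand = Counter(zip(*[candidate[i:] for i in range(n)]))
    ref = Counter(zip(*[reference[i:] for i in range(n)]))
    if not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    return overlap / sum(ref.values())

class AttackModel(nn.Module):
    """Membership-inference attack model as described above: two hidden
    layers with 256 and 20 ReLU units. The input dimension and the
    two-logit (member vs. non-member) output are assumptions."""
    def __init__(self, in_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 20), nn.ReLU(),
            nn.Linear(20, 2),
        )

    def forward(self, x):
        return self.net(x)

# Example: feed per-sample quality scores (e.g. ROUGE-N of the target
# model's generated caption against the reference) into the attack model.
score = rouge_n("a dog runs".split(), "a dog runs fast".split(), n=2)
features = torch.tensor([[score, 0.0, 0.0, 0.0]])
print(score, AttackModel()(features).shape)
```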
Gaussian Membership Inference Privacy
We propose a novel and practical privacy notion called $f$-Membership Inference Privacy ($f$-MIP), which explicitly considers the capabilities of realistic adversaries under the membership inference attack threat model. Consequently, $f$-MIP offers interpretable privacy guarantees and improved utility (e.g., better classification accuracy). In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD). Our analysis highlights that models trained with standard SGD already offer an elementary level of MIP. Additionally, we show how $f$-MIP can be amplified by adding noise to gradient updates.
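The paper's exact noise calibration is not reproduced here; the sketch below only illustrates the general mechanism the abstract names, amplifying privacy by adding noise to gradient updates: one logistic-regression SGD step with per-example gradient clipping and Gaussian noise, in the style of DP-SGD. The clip norm and noise scale are placeholders, not the values the analysis derives for a target $\mu$-GMIP level.

```python
import numpy as np

def noisy_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_std=1.0, rng=None):
    """One SGD step on the logistic loss log(1 + exp(-y * w.x)) with
    per-example gradient clipping and Gaussian noise added to the
    summed update (DP-SGD-style). Labels y are in {-1, +1}."""
    rng = np.random.default_rng() if rng is None else rng
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))   # d loss / d margin
    grads = coef[:, None] * X             # per-example gradients, (n, d)
    # Clip each example's gradient to norm <= clip.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)
    # Sum, add Gaussian noise scaled to the clip norm, average, and step.
    noisy = grads.sum(axis=0) + rng.normal(0.0, noise_std * clip, size=w.shape)
    return w - lr * noisy / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = rng.choice([-1.0, 1.0], size=32)
w = noisy_sgd_step(np.zeros(5), X, y, rng=rng)
print(w)
```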
AttackPilot: Autonomous Inference Attacks Against ML Services With LLM-Based Agents
Wu, Yixin; Wen, Rui; Cui, Chi; Backes, Michael; Zhang, Yang
Inference attacks have been widely studied and offer a systematic risk assessment of ML services; however, their implementation and the attack parameters for optimal estimation are challenging for non-experts. The emergence of advanced large language models presents a promising yet largely unexplored opportunity to develop autonomous agents as inference attack experts, helping address this challenge. In this paper, we propose AttackPilot, an autonomous agent capable of independently conducting inference attacks without human intervention. We evaluate it on 20 target services. The evaluation shows that our agent, using GPT-4o, achieves a 100.0% task completion rate and near-expert attack performance, with an average token cost of only $0.627 per run. The agent can also be powered by many other representative LLMs and can adaptively optimize its strategy under service constraints. We further perform trace analysis, demonstrating that design choices, such as a multi-agent framework and task-specific action spaces, effectively mitigate errors such as bad plans, inability to follow instructions, task context loss, and hallucinations. We anticipate that such agents could empower non-expert ML service providers, auditors, or regulators to systematically assess the risks of ML services without requiring deep domain expertise.
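A minimal sketch of two of the design choices named above: a constrained, task-specific action space, and rejection of proposed actions that fall outside it (one way such a design can mitigate hallucinated or malformed plans). The action names and the stubbed LLM call are hypothetical illustrations, not AttackPilot's actual interface.

```python
from dataclasses import dataclass, field

# Hypothetical task-specific action space: the agent may only execute
# these actions, which constrains off-task or hallucinated plans.
ACTIONS = {"query_service", "train_shadow_model", "run_attack", "report"}

@dataclass
class AgentState:
    history: list = field(default_factory=list)
    done: bool = False

def llm_propose_action(state):
    """Stub standing in for an LLM call (e.g. to GPT-4o). A real agent
    would prompt the model with the history and parse its reply."""
    script = ["query_service", "train_shadow_model", "run_attack", "report"]
    return script[min(len(state.history), len(script) - 1)]

def run_agent(max_steps=8):
    state = AgentState()
    for _ in range(max_steps):
        action = llm_propose_action(state)
        if action not in ACTIONS:
            # Reject actions outside the action space instead of
            # executing them; the agent proposes again next iteration.
            state.history.append(("rejected", action))
            continue
        state.history.append(("executed", action))
        if action == "report":
            state.done = True
            break
    return state

print(run_agent().history)
```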